134 research outputs found

    A review of research on acoustic detection of heat exchanger tube

    Leakage in heat exchanger tubes can result in unreliable products and dangerous situations, and can cause great economic losses. With the rapid development of modern acoustic detection technology, using acoustic signals to detect leakage in heat exchanger tubes has gradually been accepted and is considered to have great potential by both industry and the research community. To further advance acoustic signal detection technology and investigate better methods for leakage detection in heat exchanger tubes, this paper first gives a short overview of the theory of acoustic signal detection for heat exchanger tubes, which has been developed continuously over several decades by researchers worldwide. It then expounds the advantages and limitations of acoustic signal detection technology for heat exchanger tubes in four aspects: 1) principles of acoustic signal detection, 2) characteristics of sound wave propagation in heat exchanger tubes, 3) methods of leakage detection, and 4) leakage localization in heat exchanger tubes.
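
    As a purely illustrative aside (not a method taken from the reviewed work), leakage localization between two acoustic sensors is often explained as a time-difference-of-arrival problem. The sketch below shows a generic cross-correlation localizer; the function name, the parameters, and the assumption of a single known sound speed are hypothetical.

    ```python
    import numpy as np

    def locate_leak(sig_a: np.ndarray, sig_b: np.ndarray, fs: float,
                    sensor_distance: float, sound_speed: float) -> float:
        """Estimate the leak position from the delay between two acoustic sensors.

        sig_a, sig_b    : signals recorded at the two ends of the monitored section
        fs              : sampling rate in Hz
        sensor_distance : spacing between the sensors in metres
        sound_speed     : propagation speed of the leak noise in m/s (assumed known)
        Returns the estimated distance of the leak from sensor A, in metres.
        """
        # Cross-correlate the zero-mean signals; the peak gives the relative delay.
        corr = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which sig_a lags sig_b
        delay = lag / fs                           # time difference of arrival in seconds
        # Solve d_A - d_B = sound_speed * delay together with d_A + d_B = sensor_distance.
        return (sensor_distance + sound_speed * delay) / 2.0
    ```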

    AI Chain on Large Language Model for Unsupervised Control Flow Graph Generation for Statically-Typed Partial Code

    Control Flow Graphs (CFGs) are essential for visualizing, understanding and analyzing program behavior. For statically-typed programming languages like Java, developers obtain CFGs by using bytecode-based methods for compilable code and Abstract Syntax Tree (AST)-based methods for partially uncompilable code. However, explicit syntax errors during AST construction and implicit semantic errors caused by bad coding practices can lead to behavioral loss and deviation of CFGs. To address this issue, we propose a novel approach that leverages the error tolerance and understanding ability of pre-trained Large Language Models (LLMs) to generate CFGs. Our approach involves a Chain of Thought (CoT) with four steps: structure hierarchy extraction, nested code block extraction, CFG generation of nested code blocks, and fusion of all nested code blocks' CFGs. To address the limitations of the original CoT's single-prompt approach (i.e., completing all steps in a single generative pass), which can result in an "epic" prompt with hard-to-control behavior and error accumulation, we break down the CoT into an AI chain with explicit sub-steps. Each sub-step corresponds to a separate AI-unit, with an effective prompt assigned to each unit for interacting with LLMs to accomplish a specific purpose. Our experiments confirmed that our method outperforms existing CFG tools in terms of node and edge coverage, especially for incomplete or erroneous code. We also conducted an ablation experiment and confirmed the effectiveness of the AI chain design principles: Hierarchical Task Breakdown, Unit Composition, and Mix of AI Units and Non-AI Units. Our work opens up new possibilities for building foundational software engineering tools based on LLMs, as opposed to traditional program analysis methods.
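
    As a rough, hedged illustration of the AI-chain idea (not the authors' implementation), the sketch below replaces one monolithic CoT prompt with four chained AI-units, one focused prompt per sub-step. The `call_llm` helper, the prompt wording, and the newline-based block splitting are assumptions made purely for illustration.

    ```python
    # Minimal sketch: an AI chain for CFG generation from partial Java code.
    # Each sub-step is a separate LLM call with its own focused prompt instead of
    # a single "epic" chain-of-thought prompt.

    def call_llm(prompt: str) -> str:
        """Assumed helper: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("wire this to your LLM client of choice")

    def extract_structure_hierarchy(code: str) -> str:
        # AI-unit 1: outline classes, methods and their nesting.
        return call_llm(f"List the structural hierarchy (classes, methods, blocks) of:\n{code}")

    def extract_nested_blocks(hierarchy: str, code: str) -> str:
        # AI-unit 2: pull out each nested block (loops, branches, try/catch).
        return call_llm(f"Given this hierarchy:\n{hierarchy}\nExtract each nested block from:\n{code}")

    def cfg_for_block(block: str) -> str:
        # AI-unit 3: generate control-flow edges for a single block.
        return call_llm(f"Produce the control-flow edges (as 'node -> node' lines) for:\n{block}")

    def fuse_cfgs(block_cfgs: list) -> str:
        # AI-unit 4: merge the per-block CFGs into one graph.
        return call_llm("Merge these partial CFGs into one consistent CFG:\n" + "\n---\n".join(block_cfgs))

    def generate_cfg(partial_code: str) -> str:
        hierarchy = extract_structure_hierarchy(partial_code)
        blocks = extract_nested_blocks(hierarchy, partial_code)
        per_block = [cfg_for_block(b) for b in blocks.split("\n\n") if b.strip()]
        return fuse_cfgs(per_block)
    ```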

    Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for AI-Native Services

    Foundation models such as GPT-4 and DALL-E have brought an unprecedented AI "operating system" effect and new forms of human-AI interaction, sparking a wave of innovation in AI-native services, where natural language prompts serve directly as executable "code" (prompt as executable code), eliminating the need for a programming language as an intermediary and opening the door to personal AI. Prompt Sapper has emerged in response, committed to supporting the development of AI-native services through AI chain engineering. It creates a large language model (LLM)-empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence, unleashing the AI innovation potential of every individual and forging a future where everyone can be a master of AI innovation. This article introduces the R&D motivation behind Prompt Sapper, along with its AI chain engineering methodology and technical practices.

    Contrastive Counterfactual Learning for Causality-aware Interpretable Recommender Systems

    There has been a recent surge in the study of generating recommendations within the framework of causal inference, with the recommendation being treated as a treatment. This approach enhances our understanding of how recommendations influence user behaviour and allows identification of the factors that contribute to this impact. Many researchers in the field of causal inference for recommender systems have focused on using propensity scores, which can reduce bias but may also introduce additional variance. Other studies have proposed the use of unbiased data from randomized controlled trials, though this approach requires assumptions that may be difficult to satisfy in practice. In this paper, we first explore the causality-aware interpretation of recommendations and show that the underlying exposure mechanism can bias the maximum likelihood estimation (MLE) of observational feedback. Given that confounders may be inaccessible for measurement, we propose using contrastive self-supervised learning (SSL) to reduce exposure bias, specifically through the use of inverse propensity scores and the expansion of the positive sample set. Based on these theoretical findings, we introduce a new contrastive counterfactual learning method (CCL) that integrates three novel positive sampling strategies based on estimated exposure probability or random counterfactual samples. Through extensive experiments on two real-world datasets, we demonstrate that our CCL outperforms the state-of-the-art methods.
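
    As a hedged sketch of one ingredient mentioned above, combining contrastive learning with inverse propensity scores (not the paper's exact CCL objective), the snippet below weights an InfoNCE-style loss by estimated inverse exposure propensities. The tensor shapes, the clamping threshold, and the weighting scheme are illustrative assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def ips_contrastive_loss(user_emb, pos_item_emb, neg_item_emb, propensity, temperature=0.1):
        """Illustrative inverse-propensity-weighted contrastive loss.

        user_emb     : (B, d) user representations
        pos_item_emb : (B, d) embeddings of observed (positive) items
        neg_item_emb : (B, K, d) embeddings of sampled negative / counterfactual items
        propensity   : (B,) estimated exposure probabilities of the positives
        """
        pos_score = (user_emb * pos_item_emb).sum(-1, keepdim=True) / temperature     # (B, 1)
        neg_score = torch.einsum("bd,bkd->bk", user_emb, neg_item_emb) / temperature  # (B, K)
        logits = torch.cat([pos_score, neg_score], dim=1)                             # (B, 1+K)
        log_prob_pos = F.log_softmax(logits, dim=1)[:, 0]        # InfoNCE term for the positive
        weights = 1.0 / propensity.clamp(min=1e-3)               # inverse propensity weighting
        return -(weights * log_prob_pos).mean()
    ```

    Down-weighting frequently exposed items in this way is the standard intuition behind inverse propensity scoring; CCL's additional positive-sampling strategies are not reproduced here.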

    Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain

    API recommendation methods have evolved from literal and semantic keyword matching to query expansion and query clarification. The latest query clarification method is knowledge graph (KG)-based, but its limitations include out-of-vocabulary (OOV) failures and rigid question templates. To address these limitations, we propose a novel knowledge-guided query clarification approach for API recommendation that leverages a large language model (LLM) guided by a KG. We utilize the LLM as a neural knowledge base to overcome OOV failures, generating fluent and appropriate clarification questions and options. We also leverage the structured API knowledge and entity relationships stored in the KG to filter out noise, and transfer the optimal clarification path from the KG to the LLM, increasing the efficiency of the clarification process. Our approach is designed as an AI chain that consists of five steps, each handled by a separate LLM call, to improve the accuracy, efficiency, and fluency of query clarification in API recommendation. We verify the usefulness of each unit in our AI chain; all units received high scores close to a perfect 5. Compared to the baselines, our approach shows a significant improvement in MRR, with a maximum increase of 63.9% when the query statement is covered in the KG and 37.2% when it is not. Ablation experiments reveal that the guidance of knowledge in the KG and the knowledge-guided pathfinding strategy are crucial to our approach's performance, contributing a 19.0% and 22.2% increase in MAP, respectively. Our approach demonstrates a way to bridge the gap between KG and LLM, combining the strengths and compensating for the weaknesses of both.
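
    As a loose, hedged illustration of knowledge-guided clarification (not the paper's five-step AI chain), the sketch below lets a KG narrow down a clarification path while the LLM phrases the question and digests the user's answer. The `call_llm` helper, the dictionary representation of the KG, and the path-selection heuristic are assumptions for illustration.

    ```python
    # Illustrative sketch: KG-guided query clarification with an LLM in the loop.

    def call_llm(prompt: str) -> str:
        """Assumed helper: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("wire this to your LLM client of choice")

    def pick_clarification_path(query: str, kg: dict) -> list:
        # Placeholder heuristic: keep KG entities whose tokens appear in the query,
        # standing in for the knowledge-guided pathfinding described in the abstract.
        return [entity for entity in kg
                if any(tok in query.lower() for tok in entity.lower().split("."))][:3]

    def clarify_and_recommend(query: str, kg: dict) -> str:
        path = pick_clarification_path(query, kg)
        question = call_llm(
            f"User query: {query}\nRelevant API concepts: {path}\n"
            "Ask one short clarification question with three answer options."
        )
        answer = input(question)  # human-in-the-loop clarification step
        return call_llm(f"Refine the query '{query}' given the answer '{answer}', then recommend APIs.")
    ```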

    Unraveling the hidden function of a stabilizer in a precursor in improving hybrid perovskite film morphology for high efficiency solar cells

    The morphology of the organometal trihalide perovskite (OTP) plays a critical role in the performance of solar cell devices. Nevertheless, it has frequently been reported that the morphology of OTP films differs between laboratories even when the same film preparation procedure is used, which makes it very difficult to compare and understand the material and device physics. Here, we unravel a critical, largely ignored role of the H3PO2 stabilizer in HI in controlling the morphology of the perovskite films. The H3PO2 stabilizer in HI solution introduces MAH2PO2 impurities into the synthesized MAI (non-purified MAI) by reacting with the methylamine (MA) aqueous solution. MAH2PO2 impurities can slow down the overall crystallization process of the perovskite by forming an intermediate phase of Pb(H2PO2)2. Both MAH2PO2 and Pb(H2PO2)2 impede the fast reaction of PbI2 and MAI, resulting in highly uniform and smooth perovskite films with larger grain sizes. Recrystallization of the non-purified MAI can remove the MAH2PO2 impurity and form purified MAI, which, however, results in rough and non-uniform perovskite films. Uniform and smooth perovskite films can also be obtained by directly adding artificially synthesized MAH2PO2 into the purified MAI precursor. This study also suggests Pb(H2PO2)2 as a new precursor to form high-quality perovskite films.

    Let's Discover More API Relations: A Large Language Model-based AI Chain for Unsupervised API Relation Inference

    APIs have intricate relations that can be described in text and represented as knowledge graphs to aid software engineering tasks. Existing relation extraction methods have limitations, such as a limited API text corpus and sensitivity to the characteristics of the input text. To address these limitations, we propose utilizing large language models (LLMs) (e.g., GPT-3.5) as a neural knowledge base for API relation inference. This approach leverages the entire Web used to pre-train LLMs as a knowledge base and is insensitive to the context and complexity of input texts. To ensure accurate inference, we design our analytic flow as an AI Chain with three AI modules: API FQN Parser, API Knowledge Extractor, and API Relation Decider. The accuracies of the API FQN Parser and API Relation Decider modules are 0.81 and 0.83, respectively. Using the generative capacity of the LLM and our approach's inference capability, we achieve an average F1 value of 0.76 across the three datasets, significantly higher than the state-of-the-art method's average F1 value of 0.40. Compared to the CoT-based method, our AI Chain design improves inference reliability by 67%, and the AI-crowd-intelligence strategy enhances the robustness of our approach by 26%.
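
    To make the three-module structure concrete, here is a minimal, hedged sketch; the module boundaries follow the abstract, but the prompts, the candidate relation labels, and the `call_llm` helper are illustrative assumptions rather than the authors' implementation.

    ```python
    # Illustrative sketch of a three-module AI chain for API relation inference.

    def call_llm(prompt: str) -> str:
        """Assumed helper: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("wire this to your LLM client of choice")

    def parse_fqn(fqn: str) -> dict:
        # Module 1 (API FQN Parser): split a fully qualified name into its parts.
        parts = fqn.split(".")
        return {"package": ".".join(parts[:-2]),
                "class": parts[-2] if len(parts) > 1 else "",
                "member": parts[-1]}

    def extract_api_knowledge(api: dict) -> str:
        # Module 2 (API Knowledge Extractor): query the LLM as a neural knowledge base.
        return call_llm(f"Describe the purpose and behaviour of {api['class']}.{api['member']}.")

    def decide_relation(fqn_a: str, fqn_b: str) -> str:
        # Module 3 (API Relation Decider): infer the relation from the extracted knowledge.
        know_a = extract_api_knowledge(parse_fqn(fqn_a))
        know_b = extract_api_knowledge(parse_fqn(fqn_b))
        return call_llm(
            f"API 1: {fqn_a}\n{know_a}\nAPI 2: {fqn_b}\n{know_b}\n"
            "Which relation holds (e.g., alternative-to, depends-on, used-together, none)? "
            "Answer with one label."
        )
    ```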

    Supply chain knowledge management: a literature review

    This paper aims to contribute to the debate on the role of knowledge management in supply chain management by reviewing the published literature. A total of 58 selected refereed journal articles were systematically analyzed. The review identifies various theoretical and methodological characteristics of the way in which knowledge management applications are proposed in the supply chain context. It shows that little evidence exists of a positive relation between the use of IT solutions and firms’ performance. Some issues remain unexplored, such as the problem of knowledge obsolescence in supply chain management; a deeper understanding of the knowledge accumulation process could give new insights. The paper concludes with some future directions for theory construction and empirical research.